On Minimizing Data-read and Download for Storage-Node Recovery
We consider the problem of efficient recovery of the data stored in any
individual node of a distributed storage system, from the rest of the nodes.
Applications include handling failures and degraded reads. We measure
efficiency in terms of the amount of data-read and the download required. To
minimize the download, we focus on the minimum bandwidth setting of the
'regenerating codes' model for distributed storage. Under this model, the
system has a total of n nodes, and the data stored in any node must be
(efficiently) recoverable from any d of the other (n-1) nodes. Lower bounds on
the two metrics under this model were derived previously; it has also been
shown that these bounds are achievable for the amount of data-read and download
when d=n-1, and for the amount of download alone when d<n-1.
In this paper, we complete this picture by proving the converse result, that
when d<n-1, these lower bounds are strictly loose with respect to the amount of
read required. The proof is information-theoretic, and hence applies to
non-linear codes as well. We also show that under two (practical) relaxations
of the problem setting, these lower bounds can be met for both read and
download simultaneously. Comment: IEEE Communications Letter
CREATION OF BIOACTIVE SURFACES TO MODULATE CELL BEHAVIOR USING SURFACE INITIATED PHOTOINIFERTER-MEDIATED GRAFT PHOTOPOLYMERIZATION
Biomaterials widely used in biomedical applications still face biocompatibility issues arising from non-specific protein adsorption on the foreign surface and the consequent undesired cell response. Emerging evidence suggests that imparting specific bioactivity to the biomaterial's surface to elicit a favorable response from cells (such as osseointegration of joint implants and endothelialization of stents) can yield much better biocompatibility results when combined with passive prevention of protein adsorption. In more complex diseases like spinal cord injury and cardiomyopathy, specific biomolecules are required to elicit the desired cell responses for successful regeneration. However, for such biomolecule-based strategies to succeed, the effects of various parameters (type of molecule, concentration, spatial and temporal distribution) on the behavior of target cells need to be thoroughly investigated. Surface-initiated photoiniferter-mediated polymerization (SIPMP) was selected for this study because it (1) can graft protein-resistant polymers (such as poly(ethylene glycol) (PEG) and poly(hydroxyethyl methacrylate) (pHEMA)) on any biomaterial surface, (2) provides excellent control over the amount of polymer grafted, (3) allows covalent immobilization of biomolecules on the polymer chains, and (4) allows creation of spatial patterns and concentration gradients of biomolecules by spatially controlling polymer grafting. As the first step, poly(methacrylic acid) (pMAA) grafting via SIPMP was used to systematically control the hydrophilicity and the concentration of attached molecules on polyurethane surfaces by varying the iniferter concentration, monomer concentration, UV intensity, and UV exposure time. In the next step, covalent conjugation of the hormone noradrenalin (NA) to pMAA and pHEMA chains grafted on glass surfaces was achieved as a means to develop a novel anti-marine-biofouling surface.
Accessibility and bioactivity of the conjugated NA were confirmed by its deleterious effects on the viability and cell structure of oyster hemocytes. Finally, thickness gradients of pMAA and pHEMA chains were created on glass surfaces as a means to create protein concentration gradients and study their effects on gradient-dependent cell behaviors. Preliminary experiments on controlling cell adhesion by conjugating proteins to homogeneous pHEMA layers remain inconclusive, warranting further investigation. In summary, the results obtained in this study highlight the versatility of SIPMP for high-throughput analysis of cell behavior on surfaces with a wide variety of bioactive functionalities.
Computer Modeling of a Rotating Detonation Engine in a Rocket Configuration
Detonation-based combustors leverage the higher thermodynamic efficiency of the Atkinson cycle compared to the traditional deflagration-based combustion of the Brayton cycle. The rotating detonation engine (RDE) has one or more shock waves rotating around an annulus, and can theoretically be 20% more thermally efficient than a traditional deflagration-based cycle. An RDE was modeled in Numerical Propulsion System Simulation (NPSS) based on a model developed in Microsoft Excel. The thermodynamic analysis of the RDE in these models is broken into four streams, and empirical models were used to find the percentage of the total flow in each stream. The pre-detonation pressure was iterated until the entrance mass-flow calculations matched the exit mass-flow calculations. A parametric analysis was used to compare the variation in specific impulse from the NPSS model against the Microsoft Excel model and other published results. The RDE has a peak air-breathing engine specific impulse of approximately 5,500 sec and a peak rocket engine specific impulse of approximately 150 sec.
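The iteration on pre-detonation pressure described above is a root-finding problem: adjust the pressure until the entrance and exit mass flows agree. A minimal sketch of that loop is shown below; the two flow functions here are made-up linear placeholders, not the actual NPSS or Excel physics, and the function names are hypothetical.

```python
def exit_mass_flow(p_det):
    # placeholder model: exit flow grows with pre-detonation pressure
    return 0.8 * p_det + 2.0

def entrance_mass_flow(p_det):
    # placeholder model: entrance flow falls as back-pressure rises
    return 10.0 - 0.2 * p_det

def solve_pre_detonation_pressure(lo=0.0, hi=20.0, tol=1e-9):
    """Bisect on pre-detonation pressure until the entrance and exit
    mass-flow calculations match, mirroring the iteration in the model."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if exit_mass_flow(mid) > entrance_mass_flow(mid):
            hi = mid          # exit flow too high: lower the pressure
        else:
            lo = mid          # exit flow too low: raise the pressure
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

p = solve_pre_detonation_pressure()
```

Bisection is used here only for robustness; any standard root finder on the flow mismatch would serve the same role.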
When Do Redundant Requests Reduce Latency?
Several systems possess the flexibility to serve requests in more than one
way. For instance, a distributed storage system storing multiple replicas of
the data can serve a request from any of the multiple servers that store the
requested data, or a computational task may be performed in a compute-cluster
by any one of multiple processors. In such systems, the latency of serving the
requests may potentially be reduced by sending "redundant requests": a request
may be sent to more servers than needed, and it is deemed served when the
requisite number of servers complete service. Such a mechanism trades off the
possibility of faster execution of at least one copy of the request with the
increase in the delay due to an increased load on the system. Due to this
tradeoff, it is unclear when redundant requests may actually help. Several
recent works empirically evaluate the latency performance of redundant requests
in diverse settings.
This work aims at an analytical study of the latency performance of redundant
requests, with the primary goals of characterizing under what scenarios sending
redundant requests will help (and under what scenarios they will not help), as
well as designing optimal redundant-requesting policies. We first present a
model that captures the key features of such systems. We show that when service
times are i.i.d. memoryless or "heavier", and when the additional copies of
already-completed jobs can be removed instantly, redundant requests reduce the
average latency. On the other hand, when service times are "lighter" or when
service times are memoryless and removal of jobs is not instantaneous, then not
having any redundancy in the requests is optimal under high loads. Our results
hold for arbitrary arrival processes. Comment: Extended version of paper presented at Allerton Conference 201
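The core mechanism above can be illustrated in isolation with a toy Monte Carlo experiment: under memoryless (exponential) service times and instant removal of extra copies, a single request sent to two servers completes at the minimum of two exponentials, halving the mean latency. This sketch deliberately ignores the queueing load that the paper's full analysis accounts for, so it only demonstrates the "faster execution of at least one copy" side of the tradeoff.

```python
import random

random.seed(0)
MU = 1.0          # assumed service rate of each exponential server
TRIALS = 100_000  # Monte Carlo samples

def latency(num_copies):
    # One request sent to `num_copies` servers; extra copies are
    # cancelled instantly when the first finishes, so the observed
    # latency is the minimum of the independent service times.
    return min(random.expovariate(MU) for _ in range(num_copies))

no_redundancy = sum(latency(1) for _ in range(TRIALS)) / TRIALS
with_redundancy = sum(latency(2) for _ in range(TRIALS)) / TRIALS
# min of two exp(MU) variables is exp(2*MU), so the mean drops
# from 1/MU to 1/(2*MU) -- redundancy helps in this idealized case
```

When service times are "lighter" than exponential, or cancellation is not instantaneous, this benefit can reverse under load, which is exactly the regime the paper characterizes.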
The MDS Queue: Analysing the Latency Performance of Erasure Codes
In order to scale economically, data centers are increasingly evolving their
data storage methods from the use of simple data replication to the use of more
powerful erasure codes, which provide the same level of reliability as
replication but at a significantly lower storage cost. In particular, it is
well known that Maximum-Distance-Separable (MDS) codes, such as Reed-Solomon
codes, provide the maximum storage efficiency. While the use of codes for
providing improved reliability in archival storage systems, where the data is
less frequently accessed (or so-called "cold data"), is well understood, the
role of codes in the storage of more frequently accessed and active "hot data",
where latency is the key metric, is less clear.
In this paper, we study data storage systems based on MDS codes through the
lens of queueing theory, and term this the "MDS queue." We analytically
characterize the (average) latency performance of MDS queues, for which we
present insightful scheduling policies that form upper and lower bounds to
performance, and are observed to be quite tight. Extensive simulations are also
provided and used to validate our theoretical analysis. We also employ the
framework of the MDS queue to analyse different methods of performing so-called
degraded reads (reading of partial data) in distributed data storage
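The defining feature of the MDS queue is that a read of (n, k) MDS-coded data completes as soon as any k of the n servers respond, i.e., latency is the k-th order statistic of the server response times. The toy simulation below illustrates just that completion rule for a single request with assumed i.i.d. exponential response times; it does not model the arrival process or scheduling policies analyzed in the paper.

```python
import random

random.seed(1)
MU, N, K = 1.0, 4, 2   # assumed service rate and (n, k) = (4, 2) MDS code
TRIALS = 100_000

def read_latency():
    # A read completes once any K of the N servers respond, so the
    # latency is the K-th smallest of N independent response times.
    times = sorted(random.expovariate(MU) for _ in range(N))
    return times[K - 1]

avg = sum(read_latency() for _ in range(TRIALS)) / TRIALS
# For i.i.d. exponentials, the K-th order statistic of N samples has
# mean (1/N + 1/(N-1) + ... + 1/(N-K+1)) / MU, by summing the
# exponential "gaps" between successive completions.
theory = sum(1.0 / (N - i) for i in range(K)) / MU
```

For (n, k) = (4, 2) this gives a mean of 1/4 + 1/3 ≈ 0.583 service times, versus 1 for reading a single replica, which is the kind of latency gain the MDS-queue framework quantifies under full queueing dynamics.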
Intravenous Immunoglobulin in the Treatment of Severe Clostridium Difficile Colitis
Intravenous immunoglobulin (IVIG) has been utilized in patients with recurrent and refractory Clostridium difficile colitis. It is increasingly being used in patients with an initial clinical presentation of severe colitis. Herein, we report a case of severe C. difficile colitis successfully treated with IVIG, with a review of the medical literature to identify the optimal timing and clinical characteristics for this treatment strategy.